
    Exploring design decisions for mutation testing

    Software testing is by far the most popular technique used in industry for quality assurance. One key challenge of software testing is how to evaluate the quality of test suites in terms of their bug-finding capability. A test suite with a large number of tests, or one that achieves high statement or branch coverage, does not necessarily have a high bug-finding capability. Mutation testing is widely used in research to evaluate the quality of test suites, and it is often considered the most powerful approach for this purpose. Mutation testing proceeds in two steps. The first step is mutant generation. A mutant is a modified version of the original program obtained by applying a mutation operator. A mutation operator is a program transformation that introduces a small syntactic change to the original program. The second step is to run the test suite and determine which mutants are killed, i.e., which mutants cause some test to produce a different output than it produces on the original program. Mutation testing yields a measure of test-suite quality called the mutation score: the percentage of mutants killed by the test suite out of the total number of generated mutants. In this dissertation, we explore three design decisions related to mutation testing and provide recommendations to researchers in those regards.

    First, we look into mutation operators. To provide insights about how to improve test suites, mutation testing requires high-quality and diverse mutation operators that lead to different program behaviors. We propose the use of approximate transformations as mutation operators. Approximate transformations were introduced in the emerging area of approximate computing for changing program semantics to trade the accuracy of results for improved energy efficiency or performance. We compared three approximate transformations with a set of conventional mutation operators from the literature on nine open-source Java subjects. The results showed that approximate transformations change program behavior differently from conventional mutation operators. Our analysis uncovered code patterns in which approximate mutants survived (i.e., were not killed) and showed the practical value of approximate transformations both for understanding code amenable to approximations and for discovering bad tests. We submitted 11 pull requests to fix bad tests; seven have already been integrated by the developers.

    Second, we explore the effect of compiler optimizations on mutation testing. Multiple mutation testing tools have been developed that perform mutation at different levels. More recently, mutation testing has been performed at the level of compiler intermediate representation (IR), e.g., for the LLVM IR and Java bytecode/IR. Compiler optimizations are automatic program transformations applied at the IR level with the goal of improving a measure of program performance while preserving program semantics. Applying mutations at the IR level means that mutation testing becomes more susceptible to the effects of compiler optimizations. We investigate a new perspective on mutation testing: evaluating how standard compiler optimizations affect the cost and results of mutation testing performed at the IR level. Our study targets LLVM, a popular compiler infrastructure that supports multiple source and target languages. Our evaluation on 16 Coreutils programs uncovers several interesting relations between the numbers of mutants (including the numbers of equivalent and duplicated mutants) and the mutation scores on unoptimized and optimized programs.

    Third, we perform an empirical study to compare mutation testing at the source (SRC) and IR levels. Applying mutation at different levels offers different advantages and disadvantages, and the relation between mutants at the different levels is not clear. In our study, we compare mutation testing at the SRC and IR levels, specifically for the C programming language and the LLVM compiler IR. To make the comparison fair, we develop two mutation tools that implement conceptually the same operators at both levels. We also employ automated techniques to account for equivalent and duplicated mutants, and to determine hard-to-kill mutants. We carry out our study on 16 programs from the Coreutils library, using a total of 948 tests. Our results show interesting characteristics that can help researchers better understand the relationship between mutation testing at both levels. Overall, we find mutation testing to be better at the SRC level than at the IR level: the SRC level produces far fewer (non-equivalent) mutants and is thus less expensive, yet it still generates a similar number of hard-to-kill mutants.
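
    To make the two steps concrete, the following sketch, which is purely illustrative and not the dissertation's tooling, applies one hypothetical mutation operator (replacing + with -) to a toy function, runs a one-assertion test suite against every mutant, and computes the mutation score. The function price, the operator, and the test are all invented for the example.

        # Toy mutation testing loop (illustration only, not the dissertation's tools):
        # generate mutants with a single syntactic operator, run the test suite on
        # each, and report the mutation score.

        def make_mutants(src):
            """Yield mutants of the program text, replacing each '+' with '-'
            (a classic arithmetic-operator-replacement mutation operator)."""
            for i, ch in enumerate(src):
                if ch == "+":
                    yield src[:i] + "-" + src[i + 1:]

        original_src = "def price(base, tax): return base + tax"

        def tests_pass(src):
            """Run the (hypothetical) test suite against the given program text."""
            env = {}
            exec(src, env)                       # load original or mutated program
            return env["price"](10, 2) == 12     # a single assertion

        mutants = list(make_mutants(original_src))
        killed = sum(1 for m in mutants if not tests_pass(m))   # a failing test kills the mutant
        score = 100.0 * killed / len(mutants)                   # killed / generated, as a percentage
        print(f"{killed}/{len(mutants)} mutants killed, mutation score {score:.0f}%")

    A weaker suite, e.g. one asserting only price(0, 0) == 0, would leave the mutant alive, which is exactly the kind of low bug-finding capability the mutation score is designed to expose even when coverage is high.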

    Targeted Test Generation for Actor Systems

    This paper addresses the problem of targeted test generation for actor systems. Specifically, we propose a method to support the generation of system-level tests that cover a given code location in an actor system. The test generation method consists of two phases. First, static analysis is used to construct an abstraction of the entire actor system in the form of a message flow graph (MFG). An MFG captures the potential actor interactions defined in a program. Second, a backwards symbolic execution (BSE) is performed from the target location to an "entry point" of the actor system. BSE uses the MFG constructed in the first phase to guide execution across actors. Because concurrency leads to a huge search space for BSE, we prune the search space using two heuristics combined with a feedback-directed technique. We implement our method in Tap, a tool for Java Akka programs, and evaluate Tap on the Savina benchmarks as well as four open-source projects. Our evaluation shows that Tap achieves relatively high target coverage (78% on 1,000 targets) and detects six previously unreported bugs in the subjects.
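
    As a rough sketch of the first phase (illustrative only; the actor and handler names are hypothetical and this is not Tap's implementation), an MFG can be kept as a map from each message handler to the handlers that may send to it, and a backward traversal from a target handler towards the entry points yields the handlers along which BSE would propagate path conditions across actors.

        # Minimal message flow graph (MFG) sketch; hypothetical handler names,
        # not Tap's data structures.
        from collections import defaultdict

        class MessageFlowGraph:
            def __init__(self):
                self.senders = defaultdict(set)   # handler -> handlers that may message it

            def add_send(self, sender, receiver):
                """Record a send site: `sender` may deliver a message handled by `receiver`."""
                self.senders[receiver].add(sender)

            def backward_slice(self, target, entry_points):
                """Handlers reachable backwards from `target`, stopping at entry points;
                BSE would symbolically execute these handlers in reverse message order."""
                seen, stack = set(), [target]
                while stack:
                    handler = stack.pop()
                    if handler in seen:
                        continue
                    seen.add(handler)
                    if handler not in entry_points:
                        stack.extend(self.senders[handler])
                return seen

        mfg = MessageFlowGraph()
        mfg.add_send("Main.start", "Worker.onJob")        # Main sends Job messages to Worker
        mfg.add_send("Worker.onJob", "Logger.onRecord")   # Worker logs via Logger
        print(mfg.backward_slice("Logger.onRecord", entry_points={"Main.start"}))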

    Translating upwards: linking the neural and social sciences via neuroeconomics

    The social and neural sciences share a common interest in understanding the mechanisms that underlie human behaviour. However, interactions between neuroscience and social science disciplines remain strikingly narrow and tenuous. We illustrate the scope and challenges for such interactions using the paradigmatic example of neuroeconomics. Using quantitative analyses of both its scientific literature and the social networks in its intellectual community, we show that neuroeconomics now reflects a true disciplinary integration, such that research topics and scientific communities with interdisciplinary span exert greater influence on the field. However, our analyses also reveal key structural and intellectual challenges in balancing the goals of neuroscience with those of the social sciences. To address these challenges, we offer a set of prescriptive recommendations for directing future research in neuroeconomics.

    Viral, bacterial, and fungal infections of the oral mucosa: Types, incidence, predisposing factors, diagnostic algorithms, and management


    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer volume of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    FENICIA: a generic plasma simulation code using a flux-independent field-aligned coordinate approach

    This work concerns the development and verification of a new field-aligned coordinate approach, FCI (Flux-Coordinate Independent), which takes advantage of the anisotropy of transport in a plasma embedded in a strong magnetic field. Taking this anisotropy into account in numerical codes greatly reduces the computational cost required for a given accuracy. A distinctive feature of the approach developed in this manuscript is its ability to handle, for the first time, configurations with an X-point. All of these analyses were carried out with FENICIA, a modular code developed entirely within this thesis and able to solve a class of generic models. In summary, the method developed in this work is validated. It may prove relevant to a broad range of applications in the context of magnetic fusion. This thesis shows that the technique should apply to both fluid and gyrokinetic turbulence models, and that it notably overcomes one of the fundamental problems of current techniques, which struggle to treat the crossing of the separatrix accurately.

    The primary thrust of this work is the development and implementation of a new approach to the problem of field-aligned coordinates in magnetized plasma turbulence simulations, called the FCI approach (Flux-Coordinate Independent). The method exploits the elongated nature of micro-instability driven turbulence, which typically has perpendicular scales on the order of a few ion gyro-radii and parallel scales on the order of the machine size. Mathematically speaking, it relies on local transformations that align a suitable coordinate to the magnetic field to allow efficient computation of the parallel derivative. However, it does not rely on flux coordinates, which permits discretizing any given field on a regular grid in natural coordinates such as (x, y, z) in the cylindrical limit. The new method has a number of advantages over methods constructed starting from flux coordinates, allowing for more flexible coding in a variety of situations, including X-point configurations. In light of these findings, the plasma simulation code FENICIA has been developed based on the FCI approach, with the ability to tackle a wide class of physical models. The code has been verified on several 3D test models. The accuracy of the approach is tested in particular with respect to the question of spurious radial transport. Tests on 3D models of drift wave propagation and of the Ion Temperature Gradient (ITG) instability in cylindrical geometry in the linear regime again demonstrate the high quality of the numerical method. Finally, the FCI approach is shown to be able to deal with an X-point configuration, such as one with a magnetic island, with good convergence and conservation properties.
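
    The parallel derivative the abstract refers to can be written schematically as a centred finite difference along traced field lines; the exact discretization used in FENICIA (interpolation scheme, arc-length treatment) may differ, so the following is only a sketch:

        % Schematic FCI parallel derivative (a sketch, not necessarily FENICIA's
        % exact scheme): trace the field line from grid point (i,j,k) to the
        % neighbouring planes z_{k+1} and z_{k-1}, interpolate f at the two
        % footpoints, and take a centred difference along the field line.
        \[
          \left( \nabla_\parallel f \right)_{i,j,k} \approx
          \frac{ \mathcal{I}[f]\bigl(\mathbf{x}^{+}_{k+1}\bigr)
                 - \mathcal{I}[f]\bigl(\mathbf{x}^{-}_{k-1}\bigr) }
               { 2\,\Delta s_\parallel }
        \]
        % where \mathbf{x}^{\pm}_{k\pm 1} are the footpoints of the traced field line,
        % \mathcal{I}[f] is the interpolation of f on the neighbouring planes, and
        % \Delta s_\parallel is the arc length along the field line between planes.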

    Geant4 detector simulations for future HEP experiments

    The experimental programmes planned for the next decade are driving developments in the simulation domain: these include the High Luminosity LHC project (HL-LHC), neutrino experiments (LBNF/DUNE), and studies towards future facilities such as the Linear Collider (ILC/CLIC) and the Future Circular Collider (FCC). The complex detectors of the future, with different module- or cell-level shapes, finer segmentation, and novel materials and detection techniques, require additional features in geometry tools and bring new demands on physics coverage and accuracy within the constraints of the available computing resources. In order to achieve the desired precision in physics measurements, while preventing simulation from dominating the systematic uncertainties, more accurate simulations and larger Monte Carlo samples will be needed. This sets the challenge of developing more accurate models of physics interactions at affordable computing cost [1]. The widely used detector simulation toolkit Geant4 [2,3] is at the core of simulation in almost every HEP experiment. In this paper, we discuss the status of Geant4 in the context of detector R&D for present and future facilities. We highlight, in particular, the need to review some of the physics models' assumptions, approximations, and limitations in order to increase precision and to extend the validity of the models up to future circular collider energies of the order of 100 TeV. Examples of recent improvements in electromagnetic models are presented in detail.

    3D face recognition using covariance based descriptors

    In this paper, we propose a new 3D face recognition method based on covariance descriptors. Unlike feature-based vectors, covariance-based descriptors enable the fusion and encoding of different types of features and modalities into a compact representation. Covariance descriptors are symmetric positive definite matrices and therefore lie on Sym_d^+, the manifold of Symmetric Positive Definite (SPD) matrices, whose Riemannian geometry is defined by an inner product on its tangent space. In this article, we study geodesic distances on the Sym_d^+ manifold and use them as metrics for 3D face matching and recognition. We evaluate the performance of the proposed method on the FRGCv2 and GAVAB databases and demonstrate its superiority compared to other state-of-the-art methods.
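
    For intuition, the sketch below is an illustration rather than the authors' pipeline: the 7-dimensional per-point features, the regularization constant, and the helper names are invented. It builds a covariance descriptor from local feature vectors and compares two descriptors with the affine-invariant geodesic distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F on the SPD manifold.

        # Covariance descriptor + affine-invariant geodesic distance (illustration only).
        import numpy as np

        def covariance_descriptor(features):
            """features: (n_points, d) array of local features (e.g. coordinates,
            normals, curvatures). Returns a d x d SPD covariance matrix."""
            cov = np.cov(features, rowvar=False)
            return cov + 1e-6 * np.eye(cov.shape[0])   # small ridge keeps it SPD

        def _spd_apply(A, fn):
            """Apply a scalar function to an SPD matrix via its eigendecomposition."""
            w, V = np.linalg.eigh(A)
            return V @ np.diag(fn(w)) @ V.T

        def geodesic_distance(A, B):
            """d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F on the SPD manifold."""
            A_inv_sqrt = _spd_apply(A, lambda w: 1.0 / np.sqrt(w))
            M = A_inv_sqrt @ B @ A_inv_sqrt            # still SPD by congruence
            return np.linalg.norm(_spd_apply(M, np.log), "fro")

        rng = np.random.default_rng(0)
        probe = rng.normal(size=(500, 7))                    # hypothetical face-region features
        gallery = probe + 0.05 * rng.normal(size=(500, 7))   # a slightly perturbed scan
        d = geodesic_distance(covariance_descriptor(probe), covariance_descriptor(gallery))
        print(f"geodesic distance between descriptors: {d:.4f}")

    Probe descriptors can then be matched to gallery descriptors by nearest neighbour under such a distance.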